
    Network Creation Games: Think Global - Act Local

    We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC'03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA'14), where a worst-case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst-case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the price of anarchy. Comment: An extended abstract of this paper has been accepted for publication in the proceedings of the 40th International Conference on Mathematical Foundations of Computer Science.

    Online unit clustering in higher dimensions

    We revisit the online Unit Clustering and Unit Covering problems in higher dimensions: given a set of n points in a metric space, arriving one by one, Unit Clustering asks to partition the points into the minimum number of clusters (subsets) of diameter at most one, while Unit Covering asks to cover all points by the minimum number of balls of unit radius. In this paper, we work in R^d under the L_∞ norm. We show that the competitive ratio of any online algorithm (deterministic or randomized) for Unit Clustering must depend on the dimension d. We also give a randomized online algorithm with competitive ratio O(d^2) for Unit Clustering of integer points (i.e., points in Z^d, d ∈ N, under the L_∞ norm). We show that the competitive ratio of any deterministic online algorithm for Unit Covering is at least 2^d. This ratio is the best possible, as it can be attained by a simple deterministic algorithm that assigns points to a predefined set of unit cubes. We complement these results with some additional lower bounds for related problems in higher dimensions. Comment: 15 pages, 4 figures. A preliminary version appeared in the Proceedings of the 15th Workshop on Approximation and Online Algorithms (WAOA 2017).
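    The simple deterministic Unit Covering strategy described above can be sketched as follows: fix a grid of cells of side 2 (each cell fits inside a single L_∞ ball of unit radius) and open a new ball whenever a point arrives in a cell not yet covered. The function name below is illustrative, not taken from the paper.

```python
import math

def online_unit_cover(points):
    """Grid-based online Unit Covering under the L_infty norm.

    Each grid cell of side 2 fits inside one ball of unit radius
    (a cube of side 2), so one ball is opened per nonempty cell.
    Any offline ball overlaps at most 2^d such cells, which is
    what yields the 2^d competitive ratio.
    """
    opened = set()  # grid cells for which a ball has been opened
    for p in points:
        cell = tuple(math.floor(c / 2) for c in p)
        if cell not in opened:
            opened.add(cell)  # open a new unit ball covering this cell
    return len(opened)

# Two points in the same side-2 cell share a ball; a third point
# in a different cell forces a second ball.
print(online_unit_cover([(0.5, 0.5), (1.5, 1.5), (2.5, 0.5)]))  # → 2
```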

    Designing cost-sharing methods for Bayesian games

    We study the design of cost-sharing protocols for two fundamental resource allocation problems, Set Cover and Steiner Tree, under environments of incomplete information (the Bayesian model). Our objective is to design protocols where the worst-case Bayesian Nash equilibria have low cost, i.e., the Bayesian Price of Anarchy (PoA) is minimized. Although budget balance is a very natural requirement, it puts considerable restrictions on the design space, resulting in high PoA. We propose an alternative, relaxed requirement called budget balance in the equilibrium (BBiE). We show an interesting connection between algorithms for oblivious stochastic optimization problems and cost-sharing design with low PoA. We exploit this connection for both problems and enforce approximate solutions of the stochastic problem, as Bayesian Nash equilibria, with the same guarantees on the PoA. More interestingly, we show how to obtain the same bounds on the PoA by using anonymous posted prices, which are desirable because they are easy to implement and, as we show, induce dominant strategies for the players.

    On the Trace of the Real Author

    In pre-Revival Croatian literature there are works that so far have not been ascribed to any particular author. It is now clear that their real authors cannot be identified simply on the basis of a general stylistic impression, as the late 19th-century scholar Armin Pavić believed. The approach of Kolendić, who at the start of this century introduced the method of hapaxes (words evidenced only in the corpus of one known author), seemed much more promising. Trying to prove Vetranović's authorship of a part of the mythological drama Orfeo, he pointed out several words which he claimed to be hapaxes of the said poet. Even if the tenability of his conclusions about the Orfeo can easily be dismissed simply by using the Historical Dictionary of the Croatian Language, the national literary historiography has accepted Kolendić's attribution. However, another attribution, based on the same method and proposed by the author of the present article, was rejected. Namely, after having found hapaxes of Zoranić's Planine in a pastoral eclogue by an unknown author, he attributed the eclogue to the same poet. The conclusion is self-evident: every new method should be thoroughly tested, but if no objection is found, it must thereafter be valid for all cases wherever it can be competently applied.

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the k-forest problem in the model of resource augmentation. In the k-forest problem, given an edge-weighted graph G(V,E), a parameter k, and a set of m demand pairs ⊆ V × V, the objective is to construct a minimum-cost subgraph that connects at least k demands. The problem is hard to approximate: the best-known approximation ratio is O(min{√n, √k}). Furthermore, k-forest is as hard to approximate as the notoriously hard densest k-subgraph problem. While the k-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are not connected. In particular, the objective of the k-forest problem can be viewed as to remove at most m−k demands and find a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the k-forest problem that, for every ε > 0, removes at most m−k demands and has cost no more than O(1/ε²) times the cost of an optimal algorithm that removes at most (1−ε)(m−k) demands.

    Protein disulfide-isomerase interacts with a substrate protein at all stages along its folding pathway

    In contrast to molecular chaperones that couple protein folding to ATP hydrolysis, protein disulfide-isomerase (PDI) catalyzes protein folding coupled to formation of disulfide bonds (oxidative folding). However, we do not know how PDI distinguishes folded, partly-folded and unfolded protein substrates. As a model intermediate in an oxidative folding pathway, we prepared a two-disulfide mutant of basic pancreatic trypsin inhibitor (BPTI) and showed by NMR that it is partly-folded and highly dynamic. NMR studies show that it binds to PDI at the same site that binds peptide ligands, with rapid binding and dissociation kinetics; surface plasmon resonance shows its interaction with PDI has a Kd of ca. 10^-5 M. For comparison, we characterized the interactions of PDI with native BPTI and fully-unfolded BPTI. Interestingly, PDI does bind native BPTI, but binding is quantitatively weaker than with partly-folded and unfolded BPTI. Hence PDI recognizes and binds substrates via permanently or transiently unfolded regions. This is the first study of PDI's interaction with a partly-folded protein, and the first to analyze this folding catalyst's changing interactions with substrates along an oxidative folding pathway. We have identified key features that make PDI an effective catalyst of oxidative protein folding: differential affinity, rapid ligand exchange and conformational flexibility.

    Depsipeptide substrates for sortase-mediated N-terminal protein ligation

    Technologies that allow the efficient chemical modification of proteins under mild conditions are widely sought after. Sortase-mediated peptide ligation provides a strategy for modifying the N or C terminus of proteins. This protocol describes the use of depsipeptide substrates (containing an ester linkage) with sortase A (SrtA) to completely modify proteins carrying a single N-terminal glycine residue under mild conditions in 4–6 h. The SrtA-mediated ligation reaction is reversible, so most labeling protocols that use this enzyme require a large excess of both substrate and sortase to produce high yields of ligation product. In contrast, switching to depsipeptide substrates effectively renders the reaction irreversible, allowing complete labeling of proteins with a small excess of substrate and catalytic quantities of sortase. Herein we describe the synthesis of depsipeptide substrates that contain an ester linkage between a threonine and a glycolic acid residue and an N-terminal FITC fluorophore appended via a thiourea linkage. The synthesis of the depsipeptide substrate typically takes 2–3 days.

    A Machine Learning Trainable Model to Assess the Accuracy of Probabilistic Record Linkage

    Record linkage (RL) is the process of identifying and linking data that relate to the same physical entity across multiple heterogeneous data sources. Deterministic linkage methods rely on the presence of common, uniquely identifying attributes across all sources, while probabilistic approaches use non-unique attributes and calculate similarity indexes for pairwise comparisons. A key component of record linkage is accuracy assessment: the process of manually verifying and validating matched pairs to further refine linkage parameters and increase their overall effectiveness. This process, however, is time-consuming and impractical when applied to large administrative data sources where millions of records must be linked. Additionally, it is potentially biased, as the gold standard used is often the reviewer's intuition. In this paper, we present an approach for assessing and refining the accuracy of probabilistic linkage based on different supervised machine learning methods (decision trees, naïve Bayes, logistic regression, random forest, linear support vector machines and gradient boosted trees). We used data sets extracted from large Brazilian socioeconomic and public health care data sources. These models were evaluated using receiver operating characteristic plots, sensitivity, specificity and positive predictive values collected from 10-fold cross-validation. Results show that logistic regression outperforms the other classifiers and enables the creation of a generalized, very accurate model to validate linkage results.
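    The evaluation measures named above (sensitivity, specificity and positive predictive value) can all be read off a confusion matrix of predicted versus true match labels. A minimal sketch in Python, with illustrative function and variable names not taken from the paper:

```python
def linkage_metrics(true_labels, predicted_labels):
    """Compute sensitivity, specificity and PPV for binary
    match/non-match decisions (1 = matched pair, 0 = non-match)."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true matches found
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # non-matches rejected
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # wrongly linked pairs
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # missed matches
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on true matches
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall on non-matches
    ppv = tp / (tp + fp) if tp + fp else 0.0          # precision of declared matches
    return sensitivity, specificity, ppv

# 4 true matches and 4 non-matches; the classifier misses one
# match and wrongly links one non-match.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0]
print(linkage_metrics(truth, pred))  # → (0.75, 0.75, 0.75)
```

    In a 10-fold cross-validation these metrics would be computed on each held-out fold and then averaged.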

    Lines of Descent: Kuhn and Beyond

    Thomas S. Kuhn is famous both for his work on the Copernican Revolution and for his ‘paradigm’ view of scientific revolutions. But Kuhn later abandoned the notion of paradigm (and related notions) in favour of a more ‘evolutionary’ view of the history of science. Kuhn’s position therefore moved closer to ‘continuity’ models of scientific progress, for instance the ‘chain-of-reasoning’ models originally championed by D. Shapere. The purpose of this paper is to contribute to the debate around Kuhn’s new ‘developmental’ view and to evaluate these competing models with reference to some major innovations in the history of cosmology, from Copernicanism to modern cosmology. This evaluation is made possible through some unexpected overlap between Kuhn’s earlier discontinuity model and various versions of the later continuity models. It is the thesis of this paper that the ‘chain-of-reasoning’ model accounts better for the cosmological evidence than both Kuhn’s early paradigm model and his later developmental view of the history of science.

    Actual and undiagnosed HIV prevalence in a community sample of men who have sex with men in Auckland, New Zealand

    Background: The prevalence of HIV infection and how this varies between subgroups is a fundamental indicator of epidemic control. While there has been a rise in the number of HIV diagnoses among men who have sex with men (MSM) in New Zealand over the last decade, the actual prevalence of HIV and the proportion undiagnosed is not known. We measured these outcomes in a community sample of MSM in Auckland, New Zealand.

    Methods: The study was embedded in an established behavioural surveillance programme. MSM attending a gay community fair day, gay bars and sex-on-site venues during 1 week in February 2011 who agreed to complete a questionnaire were invited to provide an anonymous oral fluid specimen for analysis of HIV antibodies. From the 1304 eligible respondents (acceptance rate 48.5%), 1049 provided a matched specimen (provision rate 80.4%).

    Results: HIV prevalence was 6.5% (95% CI: 5.1-8.1). After adjusting for age, ethnicity and recruitment site, HIV positivity was significantly elevated among respondents who were aged 30-44 or 45 and over, were resident outside New Zealand, had 6-20 or more than 20 recent sexual partners, had engaged in unprotected anal intercourse with a casual partner, had had sex with a man met online, or had injected drugs in the 6 months prior to survey. One fifth (20.9%) of HIV infected men were undiagnosed; 1.3% of the total sample. Although HIV prevalence did not differ by ethnicity, HIV infected non-European respondents were more likely to be undiagnosed. Most of the small number of undiagnosed respondents had tested for HIV previously, and the majority believed themselves to be either "definitely" or "probably" uninfected. There was evidence of continuing risk practices among some of those with known HIV infection.

    Conclusions: This is the first estimate of actual and undiagnosed HIV infection among a community sample of gay men in New Zealand. While relatively low compared to other countries with mature epidemics, HIV prevalence was elevated in subgroups of MSM based on behaviour, and diagnosis rates varied by ethnicity. Prevention should focus on raising condom use and earlier diagnosis among those most at risk, and encouraging safe behaviour after diagnosis.